Raised Fingers

Captures raised fingers as user input while the user is standing back from the phone. This feature is intended for capturing small numeric values, e.g. rate your experience from 1 to 5.

Feature

We suggest auto-detecting which hand is raised, as this reduces confusion for users. The slight caveat with this approach is that if both hands are present, QuickPose defaults to the right hand.

case raisedFingers() // auto detects raised hand, defaulting to right if both are raised
case raisedFingers(side: .left) // left hand only
case raisedFingers(side: .right) // right hand only
case raisedFingers(style: customStyled) // with custom style

Basic Implementation

To show results you'll need to modify your view's ZStack, which we assume is set up as described in the Getting Started Guide:

ZStack(alignment: .top) {
    QuickPoseCameraView(useFrontCamera: true, delegate: quickPose)
    QuickPoseOverlayView(overlayImage: $overlayImage)
}

The basic implementation requires displaying some text on screen; start by declaring this value in your SwiftUI view.

@State private var feedbackText: String? = nil

And show this feedback text as an overlay on the view, styled to match your branding.

ZStack(alignment: .top) {
    QuickPoseCameraView(useFrontCamera: true, delegate: quickPose)
    QuickPoseOverlayView(overlayImage: $overlayImage)
}
.overlay(alignment: .center) {
    if let feedbackText = feedbackText {
        Text(feedbackText)
            .font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
            .padding(16)
            .background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
            .padding(.bottom, 40)
    }
}

Note the use of alignment in .overlay(alignment: .center) above; you can easily move the overlay around by changing it, for example to the bottom with .overlay(alignment: .bottom).

For this basic version, the feedback text shows the Raised Fingers result, and is hidden when the feature result is not available.

quickPose.start(features: [.raisedFingers()], onFrame: { status, image, features, feedback, landmarks in
    switch status {
    case .success:
        overlayImage = image
        if let result = features.values.first {
            feedbackText = result.stringValue
        } else {
            feedbackText = nil
        }
    case .noPersonFound:
        feedbackText = "Stand in view"
    case .sdkValidationError:
        feedbackText = "Be back soon"
    }
})

Conditional Styling

To give user feedback, consider using conditional styling so that when the user's measurement goes above a threshold, here 0.8, a green highlight is shown.

let greenHighlightStyle = QuickPose.Style(conditionalColors: [QuickPose.Style.ConditionalColor(min: 0.8, max: nil, color: UIColor.green)])
quickPose.start(features: [.raisedFingers(style: greenHighlightStyle)],
                onFrame: { status, image, features, feedback, landmarks in ...
})

Improving the Captured Results

The basic implementation above would likely capture an incorrect value: in the real world users need time to understand what they are doing, they change their minds, and QuickPose can simply read an incorrect value due to poor lighting or the user's stance. These issues are partially mitigated by on-screen feedback, but it's best to use a QuickPoseDoubleUnchangedDetector to keep reading values until the user has settled on a final answer.

To steady the .raisedFingers() results, declare a configurable unchanged detector, which can be used to make many of our input features read more reliably.

@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2)

This triggers the callback block only when the result has stayed the same for 2 seconds. The above uses the default leniency, which can be modified in the constructor.

@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2, leniency: 0.2) // changed to 20% leniency
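To illustrate what such a detector does, here is a minimal, SDK-free sketch of the same pattern: keep a candidate value, restart a timer whenever the incoming value moves outside a leniency band, and fire the callback once the value has been stable for the required duration. The class and parameter names here are illustrative, not the QuickPose implementation; in particular, leniency is treated as an absolute tolerance, which is an assumption.

```swift
import Foundation

// Illustrative sketch of an "unchanged value" detector (not QuickPose internals).
final class UnchangedValueDetector {
    private let similarDuration: TimeInterval
    private let leniency: Double
    private var candidate: Double?     // last stable candidate value
    private var candidateSince: Date?  // when the candidate was first seen
    private var fired = false          // ensures the callback fires once per stable run

    init(similarDuration: TimeInterval, leniency: Double = 0.1) {
        self.similarDuration = similarDuration
        self.leniency = leniency
    }

    // Call once per frame; `now` is injectable so the logic can be tested.
    func count(result: Double, at now: Date = Date(), onUnchanged: () -> Void) {
        if let candidate = candidate, abs(result - candidate) <= leniency {
            // Still within the leniency band: fire once the duration has elapsed.
            if !fired, let since = candidateSince, now.timeIntervalSince(since) >= similarDuration {
                fired = true
                onUnchanged()
            }
        } else {
            // Value changed: restart the stability timer with a new candidate.
            candidate = result
            candidateSince = now
            fired = false
        }
    }
}
```

In a real view you would feed each frame's result into count and save the value inside the callback, just as the QuickPose detector is used below.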

The unchanged detector is added to your onFrame callback and updated every time a result is found, triggering its onChange callback only when the result has not changed for the specified duration.

quickPose.start(features: [.raisedFingers()], onFrame: { status, image, features, feedback, landmarks in
    switch status {
    case .success:
        overlayImage = image
        if let result = features.values.first {
            feedbackText = result.stringValue
            unchanged.count(result: result.value) {
                print("Final Result \(result.value)")
                // your code to save result
            }
        } else {
            feedbackText = nil // blank if no hand detected
        }
    case .noPersonFound:
        feedbackText = "Stand in view"
    case .sdkValidationError:
        feedbackText = "Be back soon"
    }
})

Improving Guidance

Despite the improvements above, the user doesn't have clear instructions on what to do; this can be fixed by adding user guidance.

Our recommended pattern is to use an enum to capture all the states in your application.

enum ViewState: Equatable {
    case intro
    case measuring(score: Int)
    case completed(score: Int)
    case error(_ prompt: String)

    var prompt: String? {
        switch self {
        case .intro:
            return "How cool is this?\n On a scale of 0 to 5?"
        case .measuring(let score):
            return "It's \(score)/5 Cool?"
        case .completed(let score):
            return "Thank you\nIt's \(score)/5 Cool"
        case .error(let prompt):
            return prompt
        }
    }

    var features: [QuickPose.Feature] {
        switch self {
        case .intro, .measuring:
            return [.raisedFingers()]
        case .completed, .error:
            return []
        }
    }
}

Alongside the states we also provide a prompt text, which instructs the user at each step. Similarly, the features property specifies which features to pass to QuickPose; note that in the completed and error states QuickPose doesn't process any features.

Declare this so your SwiftUI views can access it, starting in the .intro state. Our example is simplified to just demonstrate the pattern; you would typically start with more positioning guidance.

@State private var state: ViewState = .intro

Next make some modifications so that your feedbackText is pulled from the state's prompt by default:

.overlay(alignment: .center) {
    if let feedbackText = state.prompt {
        Text(feedbackText)
            .font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
            .padding(16)
            .background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
            .padding(.bottom, 40)
    }
}

This now means you can remove the feedbackText declaration:

//@State private var feedbackText: String? = nil // remove the feedbackText

There are two changes to make. First, update QuickPose with the features for each state:

.onChange(of: state) { _ in
    quickPose.update(features: state.features)
}

Then start QuickPose with the state's features as well.

.onAppear {
    quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
    ...

And in the onFrame callback, update the state instead of the feedbackText. This lets the UI input change the view state in a controlled manner, so that, for example, the .intro state can only be reached when the user's hand goes missing from the .measuring state, or from the .error state.

quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
    switch status {
    case .success:
        overlayImage = image
        if let result = features.values.first {
            state = .measuring(score: Int(result.value))
            unchanged.count(result: result.value) {
                state = .completed(score: Int(result.value))
                // your code to save result
            }
        } else if case .measuring = state {
            state = .intro
        } else if case .error = state {
            state = .intro
        }
    case .noPersonFound:
        state = .error("Stand in view")
    case .sdkValidationError:
        state = .error("Be back soon")
    }
})
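The guarded transitions above can be exercised in isolation. The following SDK-free sketch expresses the same rules as a pure function, with a hypothetical Signal type standing in for the per-frame outcome; the names are illustrative and not part of the QuickPose API.

```swift
import Foundation

// Hypothetical per-frame outcome, standing in for the onFrame status/result pair.
enum Signal { case hand(Int), noHand, noPerson, sdkError }

enum ViewState: Equatable {
    case intro
    case measuring(score: Int)
    case completed(score: Int)
    case error(String)
}

// Pure transition function mirroring the guarded onFrame logic above.
func next(_ state: ViewState, on signal: Signal) -> ViewState {
    switch signal {
    case .hand(let score):
        return .measuring(score: score)
    case .noHand:
        // Only fall back to the intro from measuring or error,
        // so a completed result is never discarded.
        switch state {
        case .measuring, .error: return .intro
        default: return state
        }
    case .noPerson:
        return .error("Stand in view")
    case .sdkError:
        return .error("Be back soon")
    }
}
```

Factoring the transitions out like this makes the state machine unit-testable without a camera or the SDK, which is useful as the number of states grows.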